Describe the work you have done this week and summarize your learning.
This chapter’s dataset consists of housing values in suburbs of Boston (the Boston data from the MASS package).
# access the MASS package
library(MASS)
# load the data
data("Boston")
# explore the dataset
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
The dataset contains 506 observations of 14 variables, including for example the per capita crime rate (crim), nitrogen oxide concentration (nox), average number of rooms per dwelling (rm) and the median value of owner-occupied homes (medv).
The pairs() function gives a rough visual overview of the data, while summary() describes the variables numerically.
library(dplyr)
library(corrplot)
pairs(Boston) # as in DataCamp, but the plot is so small it is almost impossible to see anything
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08205 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
Based on the numerical summary, the variables crim, zn, indus, dis and rad have quite low values, while rm, age and black have higher ones. The output of pairs() is very difficult to read, so let's calculate the correlation matrix to see the relationships between the variables. The (rather large) correlation matrix is easier to interpret when visualized with the corrplot() function.
# calculate the correlation matrix and round it
cor_matrix<-cor(Boston)
# print the correlation matrix, kable for nicer looking table
knitr::kable(
cor_matrix %>% round(digits=2)
)
| | crim | zn | indus | chas | nox | rm | age | dis | rad | tax | ptratio | black | lstat | medv |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| crim | 1.00 | -0.20 | 0.41 | -0.06 | 0.42 | -0.22 | 0.35 | -0.38 | 0.63 | 0.58 | 0.29 | -0.39 | 0.46 | -0.39 |
| zn | -0.20 | 1.00 | -0.53 | -0.04 | -0.52 | 0.31 | -0.57 | 0.66 | -0.31 | -0.31 | -0.39 | 0.18 | -0.41 | 0.36 |
| indus | 0.41 | -0.53 | 1.00 | 0.06 | 0.76 | -0.39 | 0.64 | -0.71 | 0.60 | 0.72 | 0.38 | -0.36 | 0.60 | -0.48 |
| chas | -0.06 | -0.04 | 0.06 | 1.00 | 0.09 | 0.09 | 0.09 | -0.10 | -0.01 | -0.04 | -0.12 | 0.05 | -0.05 | 0.18 |
| nox | 0.42 | -0.52 | 0.76 | 0.09 | 1.00 | -0.30 | 0.73 | -0.77 | 0.61 | 0.67 | 0.19 | -0.38 | 0.59 | -0.43 |
| rm | -0.22 | 0.31 | -0.39 | 0.09 | -0.30 | 1.00 | -0.24 | 0.21 | -0.21 | -0.29 | -0.36 | 0.13 | -0.61 | 0.70 |
| age | 0.35 | -0.57 | 0.64 | 0.09 | 0.73 | -0.24 | 1.00 | -0.75 | 0.46 | 0.51 | 0.26 | -0.27 | 0.60 | -0.38 |
| dis | -0.38 | 0.66 | -0.71 | -0.10 | -0.77 | 0.21 | -0.75 | 1.00 | -0.49 | -0.53 | -0.23 | 0.29 | -0.50 | 0.25 |
| rad | 0.63 | -0.31 | 0.60 | -0.01 | 0.61 | -0.21 | 0.46 | -0.49 | 1.00 | 0.91 | 0.46 | -0.44 | 0.49 | -0.38 |
| tax | 0.58 | -0.31 | 0.72 | -0.04 | 0.67 | -0.29 | 0.51 | -0.53 | 0.91 | 1.00 | 0.46 | -0.44 | 0.54 | -0.47 |
| ptratio | 0.29 | -0.39 | 0.38 | -0.12 | 0.19 | -0.36 | 0.26 | -0.23 | 0.46 | 0.46 | 1.00 | -0.18 | 0.37 | -0.51 |
| black | -0.39 | 0.18 | -0.36 | 0.05 | -0.38 | 0.13 | -0.27 | 0.29 | -0.44 | -0.44 | -0.18 | 1.00 | -0.37 | 0.33 |
| lstat | 0.46 | -0.41 | 0.60 | -0.05 | 0.59 | -0.61 | 0.60 | -0.50 | 0.49 | 0.54 | 0.37 | -0.37 | 1.00 | -0.74 |
| medv | -0.39 | 0.36 | -0.48 | 0.18 | -0.43 | 0.70 | -0.38 | 0.25 | -0.38 | -0.47 | -0.51 | 0.33 | -0.74 | 1.00 |
# visualize the correlation matrix
corrplot(cor_matrix, method="circle", type="upper", cl.pos="b", tl.pos="d", tl.cex=0.6)
From the correlation matrix we can see that there are, among others, strong

- negative correlations between the distance to employment centres and the proportion of units built before 1940, the nitrogen oxide concentration, and the proportion of non-retail business acres (dis & age, dis & nox, dis & indus), as well as between the median value of homes and the percentage of lower-status population (medv & lstat)
- positive correlations between the property tax rate and accessibility to radial highways (tax & rad).
Standardization of the data is useful when the variables have large differences in their ranges or are measured in different units. Let's scale the Boston data by subtracting the column mean from each column and dividing the difference by the column's standard deviation: \[scaled(x) = \frac{x - mean(x)}{sd(x)}.\] This is one of the most popular ways of standardizing data, the Z-score. After scaling, all variables have a mean of zero and a standard deviation of one, so they are on the same scale.
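Before applying the ready-made scale() function below, the same calculation can be checked by hand for a single column (a minimal sketch using the crim variable):
# manual z-score for one column; scale() below does this for every column at once
crim_manual <- (Boston$crim - mean(Boston$crim)) / sd(Boston$crim)
# compare with the corresponding column of the scaled data; the two should be essentially identical
all.equal(as.numeric(scale(Boston)[, "crim"]), crim_manual)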
# center and standardize variables
boston_scaled <- scale(Boston)
# summaries of the scaled variables
summary(boston_scaled)
## crim zn indus chas
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563 Min. :-0.2723
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668 1st Qu.:-0.2723
## Median :-0.390280 Median :-0.48724 Median :-0.2109 Median :-0.2723
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150 3rd Qu.:-0.2723
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202 Max. : 3.6648
## nox rm age dis
## Min. :-1.4644 Min. :-3.8764 Min. :-2.3331 Min. :-1.2658
## 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366 1st Qu.:-0.8049
## Median :-0.1441 Median :-0.1084 Median : 0.3171 Median :-0.2790
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059 3rd Qu.: 0.6617
## Max. : 2.7296 Max. : 3.5515 Max. : 1.1164 Max. : 3.9566
## rad tax ptratio black
## Min. :-0.9819 Min. :-1.3127 Min. :-2.7047 Min. :-3.9033
## 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876 1st Qu.: 0.2049
## Median :-0.5225 Median :-0.4642 Median : 0.2746 Median : 0.3808
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058 3rd Qu.: 0.4332
## Max. : 1.6596 Max. : 1.7964 Max. : 1.6372 Max. : 0.4406
## lstat medv
## Min. :-1.5296 Min. :-1.9063
## 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 3.5453 Max. : 2.9865
Next we shall create a categorical variable crime from the standardized per capita crime rate (crim). Let's cut the variable at its quantiles so that low, middle and high crime rates end up in their own categories. Finally, let's drop the original crim variable from the data set.
# class of the boston_scaled object
class(boston_scaled)
## [1] "matrix" "array"
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)
# create a quantile vector of crim and print it
bins <- quantile(boston_scaled$crim)
bins
## 0% 25% 50% 75% 100%
## -0.419366929 -0.410563278 -0.390280295 0.007389247 9.924109610
# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))
# look at the table of the new factor crime
table(crime)
## crime
## low med_low med_high high
## 127 126 126 127
# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)
# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)
In order to test the predictive power of a statistical method, let’s divide the scaled Boston data set randomly into a training set (80 %) and a test set (20 %).
# number of rows in the Boston dataset
n <- nrow(Boston)
# choose randomly 80% of the rows
ind <- sample(n, size = n * 0.8)
# create train set
train <- boston_scaled[ind,]
# create test set
test <- boston_scaled[-ind,]
The standardization was done to satisfy the assumptions of linear discriminant analysis (LDA):

- the variables are normally distributed (conditional on the classes)
- each variable has the same variance in every class (a rough check of this is sketched below).

The general idea of LDA is to reduce the dimensions by removing redundant and dependent features, transforming the features from a higher-dimensional space into a space with fewer dimensions.
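As the rough check mentioned above (a small sketch using the train set created earlier, not part of the original analysis), one can compare the standard deviations of the predictors within each crime class; the values should be of broadly similar magnitude across classes:
# group-wise standard deviations of the predictors as a crude check of the common-variance assumption
aggregate(. ~ crime, data = train, FUN = sd)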
Now, let’s fit LDA on the train set with the newly-created crime as the target variable and all other variables as predictor variables. The result can be visualised by the LDA (bi)plot.
# linear discriminant analysis
lda.fit <- lda(crime ~., data = train)
# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## low med_low med_high high
## 0.2574257 0.2549505 0.2475248 0.2400990
##
## Group means:
## zn indus chas nox rm age
## low 0.8961659 -0.8912764 -0.19661560 -0.8700725 0.43187465 -0.8526406
## med_low -0.1038437 -0.2573648 0.03346513 -0.5256372 -0.13228409 -0.3114561
## med_high -0.3740445 0.1964216 0.16075196 0.3455794 0.05359336 0.4092563
## high -0.4872402 1.0149946 -0.06938576 1.0132067 -0.42774590 0.8176832
## dis rad tax ptratio black lstat
## low 0.9169470 -0.6936501 -0.7489599 -0.40183767 0.3712496 -0.7867805
## med_low 0.2926393 -0.5570499 -0.4700890 -0.07812013 0.3186426 -0.1017845
## med_high -0.3412197 -0.4007469 -0.2901269 -0.15498478 0.1049990 0.1043594
## high -0.8604906 1.6596029 1.5294129 0.80577843 -0.7555242 0.8551428
## medv
## low 0.49240849
## med_low -0.01897922
## med_high 0.11386124
## high -0.65886070
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.10625047 0.64203105 -0.84646959
## indus -0.01342107 -0.22722192 0.20264181
## chas -0.06956844 -0.17282530 0.13901653
## nox 0.28605665 -0.82191423 -1.50614120
## rm -0.07681302 -0.09178225 -0.15146076
## age 0.30133434 -0.16103353 -0.25232769
## dis -0.07766235 -0.23058033 -0.16284979
## rad 3.20595157 0.98051143 -0.06831616
## tax -0.01259514 -0.02397246 0.64898975
## ptratio 0.16718983 -0.03604918 -0.38197885
## black -0.16633574 -0.01363808 0.13945797
## lstat 0.18898465 -0.40910236 0.27544113
## medv 0.18230193 -0.39617078 -0.39054043
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.9514 0.0368 0.0119
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
# target classes as numeric
classes <- as.numeric(train$crime)
# plot the lda results
plot(lda.fit, dimen = 2, col=classes, pch=classes)
lda.arrows(lda.fit, myscale = 1)
Here we can see the results of the LDA. Each color represents a class of the target variable. The predictor variables are drawn as arrows in the middle of the plot, the length and direction of each arrow depicting the effect of that predictor. It seems that the variables rad, zn and nox separate the classes best here.
Now we use the fitted LDA model to predict the classes of the test data. LDA calculates, for each new observation, the probability of belonging to each of the classes, and the observation is then classified into the class with the highest probability.
First, let’s save the correct classes and then remove the crime variable.
# save the correct classes from test data
correct_classes <- test$crime
# remove the crime variable from test data
test <- dplyr::select(test, -crime)
# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)
# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 15 7 1 0
## med_low 5 15 3 0
## med_high 0 3 23 0
## high 0 0 1 29
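The classification rule described earlier can be verified directly from the prediction object (a minimal sketch): for every test observation the predicted class should be the one with the highest posterior probability.
# posterior probabilities of the first test observations
head(round(lda.pred$posterior, 3))
# check that the predicted class is always the class with the largest posterior probability
all(as.character(lda.pred$class) == colnames(lda.pred$posterior)[max.col(lda.pred$posterior, ties.method = "first")])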
Returning to the cross-tabulation: the prediction would have been perfect if all values fell on the diagonal. That is certainly not the case here, but the largest counts are on the diagonal. There is some mixing among the first three classes, while the last class (high) is predicted most accurately. This was to be expected based on the LDA plot of the training set.
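A quick numeric summary of the table is the overall classification accuracy, i.e. the proportion of test observations on the diagonal (a small sketch using the objects defined above):
# overall accuracy on the test set
conf_tab <- table(correct = correct_classes, predicted = lda.pred$class)
sum(diag(conf_tab)) / sum(conf_tab) # roughly 0.8 with the counts shown above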
Different distance measures (e.g. Euclidean or Manhattan) are used to quantify how similar or dissimilar observations are to each other. Similar observations form clusters, which can be found with different methods (e.g. k-means).
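To make the two distance measures concrete, here is a tiny hand-calculated illustration on two toy vectors (not from the Boston data):
# Euclidean and Manhattan distance between two toy vectors
x <- c(1, 2, 3)
y <- c(2, 0, 3)
sqrt(sum((x - y)^2)) # Euclidean distance: sqrt(1 + 4 + 0) = 2.24
sum(abs(x - y)) # Manhattan distance: 1 + 2 + 0 = 3
# dist() returns the same values
dist(rbind(x, y))
dist(rbind(x, y), method = "manhattan")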
Let’s find clusters on the Boston dataset using k-means. First, let’s reload the dataset and standardize it to get comparable distances (Euclidean and Manhattan). Then let’s run the k-means algorithm on the dataset.
# reload Boston from MASS
library(MASS)
library(ggplot2)
data("Boston")
# center and standardize variables
boston_scaled <- scale(Boston)
# euclidean distance matrix
dist_eu <- dist(boston_scaled)
# look at the summary of the distances
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.1343 3.4625 4.8241 4.9111 6.1863 14.3970
# manhattan distance matrix
dist_man <- dist(boston_scaled,method="manhattan")
# look at the summary of the distances
summary(dist_man)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.2662 8.4832 12.6090 13.5488 17.7568 48.8618
One can see that the Manhattan distance gives much larger values than the Euclidean distance. For now, however, let's use the Euclidean distance.
# k-means clustering
km1 <-kmeans(boston_scaled, centers = 1)
km2 <-kmeans(boston_scaled, centers = 2)
km3 <-kmeans(boston_scaled, centers = 3)
km4 <-kmeans(boston_scaled, centers = 4)
# plot the Boston dataset with clusters
pairs(boston_scaled, col = km1$cluster) # 1 cluster
pairs(boston_scaled, col = km2$cluster) # 2 clusters
pairs(boston_scaled, col = km3$cluster) # 3 clusters
pairs(boston_scaled, col = km4$cluster) # 4 clusters
# the full pairs plot gives too general a view, so zoom in on a few variables
pairs(boston_scaled[,6:10], col = km2$cluster) # 2 clusters
pairs(boston_scaled[,6:10], col = km3$cluster) # 3 clusters
pairs(boston_scaled[,6:10], col = km4$cluster) # 4 clusters
Different numbers of centers (1, 2, 3 or 4) were tried for k-means clustering. One cluster seemed too few, since clear sub-groups were still visible, whereas four clusters did not bring a dramatic change (the centroids and the clusters hardly differed). Thus the optimal number seemed to be 2 or 3 clusters.
set.seed(123)
# determine the number of clusters
k_max <- 10
# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss})
# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')+scale_x_continuous(breaks = 1:10,labels=1:10)
The total within-cluster sum of squares (TWCSS) suggests that 2 is the optimal number of clusters, since the TWCSS drops most sharply when moving from 1 to 2 clusters.
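Based on this, the final model could be fitted with two clusters and visualized in the same way as before (a small sketch repeating the earlier plotting approach):
# fit the final k-means model with the chosen number of clusters
km_final <- kmeans(boston_scaled, centers = 2)
# plot a subset of the variables, colored by cluster membership
pairs(boston_scaled[, 6:10], col = km_final$cluster)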
# Run the code below for the (scaled) train data that you used to fit the LDA.
#The code creates a matrix product, which is a projection of the data points.
model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404 13
dim(lda.fit$scaling)
## [1] 13 3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
# Next, install and access the plotly package. Create a 3D plot (**Cool!**)
# of the columns of the matrix product by typing the code below."
library(plotly)
# Note! To install plotly in Linux, remember to install libcurl from terminal.
# * deb: libcurl4-openssl-dev (Debian, Ubuntu, etc)
# * rpm: libcurl-devel (Fedora, CentOS, RHEL)
# * csw: libcurl_dev (Solaris)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color=train$crime)